
[Opinion] Racist algorithms and AI can't determine EU migration policy

#artificialintelligence

We have visited high-tech refugee camps in Greece, seen violent borders all over Europe, and spoken with hundreds of people who are at the sharp end of technologically assisted brutality. AI in migration is increasingly used to make predictions, assessments, and evaluations based on the racist assumptions it is programmed with. But with upcoming legislation to regulate artificial intelligence (the EU's "AI Act"), the EU has a chance to live up to its self-proclaimed values, set a global standard, and draw red lines around the most harmful technologies. Politicians have turned migration into a political weapon, and the EU's policies are becoming increasingly violent: hardening of borders, increased deportations, empowering agencies like Frontex that have been repeatedly implicated in severe human rights abuses, and even condoning the arrest and incarceration of search-and-rescue volunteers, doctors, lawyers, and journalists. Increasingly, surveillance and automated technologies are being tested out at borders and in migration procedures -- with people seeking safety being treated as guinea pigs.


Twitter's racist algorithm is also ageist, ableist and Islamophobic, researchers find

#artificialintelligence

The same artificial intelligence had learned to ignore people with white or … AI Group, which studies and consults on biases in artificial intelligence.


AI researchers say scientific publishers help perpetuate racist algorithms

MIT Technology Review

The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. It was developed by researchers at Harrisburg University and was due to be presented at a forthcoming conference. The demands: Citing the work of leading Black AI scholars, the letter debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It also lists three demands: 1) for Springer Nature to rescind its offer to publish the study; 2) for it to issue a statement condemning the use of statistical techniques such as machine learning to predict criminality and acknowledging its role in incentivizing such research; and 3) for all scientific publishers to commit to not publishing similar papers in the future.


Racist algorithms: how Big Data makes bias seem objective

#artificialintelligence

The Ford Foundation's Michael Brennan discusses the many studies showing how algorithms can magnify bias -- like the prevalence of police background-check ads shown against searches for Black-sounding names. What's worse is the way that machine learning magnifies these problems. If an employer only hires young applicants, a machine learning algorithm will learn to screen out all older applicants without anyone having to tell it to do so. Worst of all, the use of algorithms to accomplish this discrimination lends a veneer of objective respectability to racism, sexism, and other forms of discrimination. I recently attended a meeting about some preliminary research on "predictive policing," which uses these machine learning algorithms to allocate police resources to likely crime hotspots.
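The hiring example above can be sketched directly. The following is a minimal, purely illustrative Python simulation -- all data, features, and function names are invented for this sketch -- in which a simple one-feature decision rule is fit to historical hiring decisions that favored young applicants. Even though skill is also available as a feature, the learner keys on age, because age is what best explains the biased labels; no one has to program the discrimination in.

```python
import random

random.seed(0)

# Synthetic applicant pool: (age, skill score). Invented data for illustration.
applicants = [(random.randint(20, 60), random.random()) for _ in range(1000)]

# Historical labels reflect a biased hiring process: only young applicants
# were hired, regardless of skill.
labels = [1 if age < 35 else 0 for age, _ in applicants]

def best_stump(data, labels):
    """Fit a one-feature threshold rule by minimizing training error."""
    best = None
    for feat in (0, 1):  # feature 0 = age, feature 1 = skill
        for t in sorted({x[feat] for x in data}):
            for direction in (1, -1):
                # direction 1 predicts "hire" below the threshold,
                # direction -1 predicts "hire" above it.
                preds = [1 if direction * (x[feat] - t) < 0 else 0 for x in data]
                err = sum(p != y for p, y in zip(preds, labels))
                if best is None or err < best[0]:
                    best = (err, feat, t, direction)
    return best

err, feat, threshold, direction = best_stump(applicants, labels)
feature_name = "age" if feat == 0 else "skill"
print(f"learned rule uses feature: {feature_name}")  # the stump keys on age
```

The learned rule perfectly reproduces the historical bias (it screens on age and ignores skill entirely), which is exactly the mechanism the excerpt describes: the model inherits discrimination from the data it is trained on.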


Artificial intelligence: How to avoid racist algorithms

BBC News

There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people. The result can be that the decision-making becomes inherently biased, albeit accidentally. Try searching online for an image of "hands" or "babies" using any of the big search engines and you are likely to find largely white results. In 2015, graphic designer Johanna Burai created the World White Web project after searching for an image of human hands and finding exclusively white hands in the top image results on Google. Her website offers "alternative" hand pictures that can be used by content creators online to redress the balance and thus be picked up by the search engine.


Can Computers be Racist? Only if a Person Develops the Racist Algorithms.

#artificialintelligence

Dr. Latanya Sweeney explains how racism gets automatically baked into the algorithms that quietly shape our world. Computers, like children, are not born -- or, in the computer's case, created -- racist, prejudiced, sexist, or hateful. The people who develop the artificial intelligence (AI) algorithms -- machine learning (ML), cognitive computing, natural language processing (NLP), and so on -- can code the computer to be racist, prejudiced, sexist, and hateful, or not. These algorithms form the very foundation for how that AI operates from that time forward, just as parents, teachers, communities, and friends form the foundation, early on, for how you operate, interact, and behave.